In many practical applications, machine learning data arrives sequentially over time in large chunks. Practitioners must then decide how to allocate their computational budget so as to obtain the best performance at any point in time. Online learning theory for convex optimization suggests that the best strategy is to use data as soon as it arrives. However, this may not be the best strategy when using deep non-linear networks, particularly when these perform multiple passes over each chunk of data, rendering the overall distribution non-i.i.d. In this paper, we formalize this learning setting in the simplest scenario, in which each data chunk is drawn from the same underlying distribution, and make a first attempt at empirically answering the following questions: How long should the learner wait before training on the newly arrived chunks? What architecture should the learner adopt? Should the learner increase capacity over time as more data is observed? We probe this setting using convolutional neural networks trained on classic computer vision benchmarks, as well as a large transformer model trained on a large-scale language modeling task. Code is available at \url{www.github.com/facebookresearch/alma}.
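To make the setting concrete, here is a minimal, purely illustrative sketch of the chunked stream described above, with a `wait` knob for how many chunks to buffer before spending compute on a training round; every name in it is hypothetical and unrelated to the released code.

```python
import random

# Illustrative stand-ins: a stream of chunks drawn from one fixed distribution,
# and a learner that makes several passes over whatever data it trains on.
def sample_chunk(chunk_size=1000):
    """Draw one chunk of (x, y) pairs from the same underlying distribution."""
    return [(random.gauss(0, 1), random.random() < 0.5) for _ in range(chunk_size)]

class Learner:
    def train(self, data, passes=1):
        # Placeholder for SGD over `data`; multiple passes are what make the
        # overall stream of updates non-i.i.d. in the setting above.
        pass

def run_stream(num_chunks=10, wait=2, passes_per_update=4):
    """Buffer `wait` chunks before each training round instead of training immediately."""
    learner, buffer = Learner(), []
    for t in range(num_chunks):
        buffer.extend(sample_chunk())
        if (t + 1) % wait == 0:      # the "how long should the learner wait?" knob
            learner.train(buffer, passes=passes_per_update)
            buffer = []              # budget spent on the accumulated mega-batch
    return learner

if __name__ == "__main__":
    run_stream()
```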
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches in terms of sample complexity, computational cost, and memory cost. Towards this end, we first introduce a new and more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance than GEM, while being almost as computationally and memory efficient as EWC and other regularization-based methods. Finally, we show that all algorithms, including A-GEM, can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency.
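For concreteness, below is a minimal NumPy sketch of the single-constraint gradient correction that gives A-GEM its efficiency relative to GEM; `g` and `g_ref` are assumed to be flattened parameter gradients, with `g_ref` computed on a batch sampled from the episodic memory (the names are illustrative, not taken from the authors' code).

```python
import numpy as np

def a_gem_update(g, g_ref):
    """A-GEM's correction: if the current-task gradient g conflicts with the
    averaged episodic-memory gradient g_ref, project g onto the half-space
    where the memory loss does not increase; otherwise use g unchanged."""
    dot = g @ g_ref
    if dot < 0:
        return g - (dot / (g_ref @ g_ref)) * g_ref
    return g
```

Because only one reference gradient is involved, the projection is a single dot product and rescaling per step, which is why A-GEM avoids GEM's per-task constraints and quadratic program.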
One major obstacle towards AI is the poor ability of models to solve new problems more quickly and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM), that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.
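As a rough illustration of the projection GEM performs at each update, the sketch below recovers the gradient closest to the current one subject to non-negative inner products with stored past-task gradients, so losses on the episodic memories are not increased. The paper solves an equivalent small dual quadratic program; this sketch simply hands the primal problem to a generic SciPy solver for readability, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def gem_project(g, past_grads):
    """Find g_tilde minimizing 0.5 * ||g_tilde - g||^2 subject to
    g_tilde . g_k >= 0 for every stored past-task gradient g_k."""
    G = np.stack(past_grads)          # one row per previous task
    if np.all(G @ g >= 0):
        return g                      # no conflict with any memory: keep the raw gradient
    objective = lambda g_tilde: 0.5 * np.sum((g_tilde - g) ** 2)
    constraints = [{"type": "ineq", "fun": lambda g_tilde, gk=gk: g_tilde @ gk} for gk in G]
    return minimize(objective, g, constraints=constraints).x
```

The number of constraints grows with the number of tasks seen so far, which is the cost A-GEM later reduces to a single averaged constraint.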